22 research outputs found

    Concurrent Learning-Based Neuro-Adaptive Robust Tracking Control of Wheeled Mobile Robot: An Event-Triggered Design

    In this paper, an event-based neuro-adaptive robust tracking controller with concurrent learning is designed for a perturbed and networked differential-drive mobile robot (DMR). A radial basis function neural network (RBFNN), which approximates the unknown perturbation, is used to design an adaptive sliding mode controller (SMC). The RBFNN weights and SMC parameters are estimated online using an adaptive tuning law to ensure performance with reduced chattering. To improve the convergence of the RBFNN weight-estimation error, a concurrent learning-based adaptive law is derived that uses both online measurements and recorded data. Further, a suitable triggering condition is designed to reduce the number of control computations and conserve network resources without sacrificing the stability of the sampled-data closed-loop control system. A finite sampling frequency is guaranteed for the designed triggering condition by establishing a positive lower bound on the inter-event execution time, which is equivalent to Zeno-free behavior of the system. Finally, the proposed event-based neuro-adaptive robust controller is implemented on a practical system (Q-bot 2e) to show the effectiveness of the design.
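    The core idea of event-triggered control described above — recompute the control input only when the state has drifted sufficiently from its value at the last triggering instant — can be sketched as follows. This is a minimal toy illustration with a scalar plant x' = u and a static drift threshold; the paper's actual triggering condition, plant model, and adaptive laws are more elaborate, and all names and parameter values here are illustrative assumptions.

    ```python
    def simulate_event_triggered(x0=1.0, k=2.0, dt=0.01, steps=1000, delta=0.05):
        """Toy event-triggered control of x' = u with u = -k * x_event held
        constant between events; u is recomputed only when the state has
        drifted more than `delta` from its value at the last event."""
        x = x0
        x_event = x          # state sampled at the last triggering instant
        u = -k * x_event     # zero-order-held control input
        events = 0
        for _ in range(steps):
            # Triggering condition: recompute control only on sufficient drift.
            if abs(x - x_event) > delta:
                x_event = x
                u = -k * x_event
                events += 1
            x += dt * u      # forward-Euler step of the plant x' = u
        return x, events
    ```

    Running this, the state converges to a neighborhood of the origin while the control is recomputed far fewer times than the number of simulation steps, which is the resource saving the triggering condition is designed to provide.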

    Design and Analysis of a Low-profile Microstrip Antenna for 5G Applications using AI-based PSO Approach, Journal of Telecommunications and Information Technology, 2023, nr 3

    Microstrip antennas are high-gain, low-profile aerials for wireless applications operating at frequencies above 100 MHz. This paper presents the study and design of a low-cost slotted microstrip patch antenna for 5G millimeter-wave applications. The research focuses on the effect of ground slots and patch slots on antenna parameters such as return loss, VSWR, gain, radiation pattern, and axial ratio. The working frequency range spans 24 to 28 GHz, falling within the 5G specifications. Particle swarm optimization (PSO), a technique from the field of artificial intelligence (AI), is used to approximately solve numerical maximization and minimization problems that are highly challenging or even impossible to solve exactly. Here, we design and analyze a low-profile printed microstrip antenna for 5G applications using the AI-based PSO approach. The novelty of the research lies mainly in the design approach, the compactness of size, and the antenna's applicability. The antenna was simulated using HFSS simulation software.
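    For readers unfamiliar with PSO, the algorithm the authors apply can be sketched in a few lines: a swarm of candidate solutions moves through the search space, each particle pulled toward its own best-seen point and the swarm's global best. This is a generic minimal sketch, not the authors' implementation; in their setting the objective function would wrap an HFSS antenna simulation, whereas here any Python callable works, and all parameter values are conventional defaults.

    ```python
    import random

    def pso(f, dim=2, n_particles=20, iters=100, bounds=(-5.0, 5.0),
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimization minimizing f over a box."""
        rng = random.Random(seed)
        lo, hi = bounds
        xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]              # per-particle best positions
        pbest_f = [f(x) for x in xs]
        g = min(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm-wide best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    # Velocity update: inertia + cognitive + social terms.
                    vs[i][d] = (w * vs[i][d]
                                + c1 * r1 * (pbest[i][d] - xs[i][d])
                                + c2 * r2 * (gbest[d] - xs[i][d]))
                    xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
                fx = f(xs[i])
                if fx < pbest_f[i]:
                    pbest[i], pbest_f[i] = xs[i][:], fx
                    if fx < gbest_f:
                        gbest, gbest_f = xs[i][:], fx
        return gbest, gbest_f
    ```

    For example, `pso(lambda x: sum(v*v for v in x))` drives the swarm toward the origin, the minimizer of the sphere function.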

    Specific heat, entropy and magnetic properties of high-Tc cuprates

    No full text
    The finite-temperature properties of the high-Tc cuprates are investigated by an exact method at optimal doping using the t–t′–J–V model. The role of the next-nearest-neighbor (NNN) hopping interaction t′ and the nearest-neighbor Coulomb repulsion V on the total energy, specific heat, entropy, magnetic properties, etc., in the superconducting as well as the normal phase is considered. The specific heat curves show a single-peak structure in the parameter range suitable for the existence of the superconducting phase. A two-peak structure in the specific heat curve is observed at sufficiently large values of V/t. An asymmetry in the specific heat curves and peak positions is observed between the hole- and electron-doped cuprates. The existence of a metallic phase is detected for positive t′/t for V/t ≤ 4J. The entropy calculation shows that the system goes to a more disordered state with negative t′/t and V/t. Non-Fermi-liquid behavior is revealed at low temperatures for positive t′ and small values of V. An asymmetry in the Néel temperature is observed between the hole- and electron-doped cuprates. An unsaturated ferromagnetic phase emerges with an increase of V/t. Schematic magnetic phase diagrams are shown.
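    For reference, the abstract does not spell out the model; up to sign and projection conventions, the t–t′–J–V Hamiltonian is conventionally written as

    ```latex
    H = -t \sum_{\langle i,j\rangle,\sigma}
            \left( \tilde{c}^{\dagger}_{i\sigma} \tilde{c}_{j\sigma} + \text{H.c.} \right)
        - t' \sum_{\langle\langle i,j\rangle\rangle,\sigma}
            \left( \tilde{c}^{\dagger}_{i\sigma} \tilde{c}_{j\sigma} + \text{H.c.} \right)
        + J \sum_{\langle i,j\rangle}
            \left( \mathbf{S}_i \cdot \mathbf{S}_j - \tfrac{1}{4}\, n_i n_j \right)
        + V \sum_{\langle i,j\rangle} n_i n_j
    ```

    where the tilded operators act in the projected subspace with no double occupancy, ⟨i,j⟩ and ⟨⟨i,j⟩⟩ denote nearest-neighbor and next-nearest-neighbor pairs, J is the antiferromagnetic exchange, and V is the NN Coulomb repulsion studied here.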

    Hole pairing and ground state properties of high-Tc cuprates

    No full text
    The t–t′–J–V model, one of the realistic models for studying the high-Tc cuprates, is investigated to explore hole pairing and other ground-state properties using the exact diagonalization (ED) technique with 2 holes in a small 8-site cluster. The roles of next-nearest-neighbor (NNN) hopping and nearest-neighbor (NN) Coulomb repulsion are considered. The qualitative behavior of the ground-state energies of an 8-site cluster and of 16- or 18-site clusters appears similar. Results show that a small short-ranged antiferromagnetic (AF) correlation exists in the 2-hole case, which is favored by large V/t. A superconducting phase emerges at 0 ≤ V/t ≤ 4J. The hole–hole correlation calculation also suggests that the two holes of the pair sit at either |i − j| = 1 or √2. Negative t′/t suppresses the possibility of hole pairing. Though the s-wave pairing susceptibility is dominant, the pairing correlation length calculation indicates that the long-range pairing suitable for superconductivity is in the d-wave channel. Both the s- and d-wave pairing susceptibilities are suppressed by V/t, while the d-wave (s-wave) susceptibility is favored (suppressed) by t′/t. The charge gap shows gapped behavior, while a spin-gapless region exists at small V/t for finite t′/t.

    NOVEL BAND-REJECT FILTER DESIGN USING MULTILAYER BRAGG MIRROR AT 1550 NM

    No full text
    A novel band-reject filter based on a multilayer Bragg mirror structure is proposed by computing the reflection coefficient at the 1550 nm wavelength for optical communication. The dimensions of the different layers and the material composition are modified to study the effect on the rejection bandwidth, and the number of layers is also varied to analyze the passband characteristics. The GaN/AlxGa1-xN composition is taken as the choice for simulation, carried out using the propagation matrix method. The refractive indices of the materials are treated as functions of bandgap, operating wavelength, and material composition following Adachi's model. One interesting result arises from the computation: the band-reject filter may be converted into a band-pass one by suitably varying the ratio of the thicknesses in the unit cell, or by varying the Al mole fraction. The simulated results can be utilised to design VCSEL mirrors for optical transmitters.
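    The propagation (characteristic) matrix method used here can be sketched for normal incidence: each layer contributes a 2×2 matrix determined by its phase thickness, the matrices are multiplied through the stack, and the reflectance follows from the combined matrix. This is a generic textbook sketch, not the authors' code; the refractive indices below are illustrative constants, not values from Adachi's model.

    ```python
    import cmath
    import math

    def bragg_reflectance(wavelength, n_hi=2.40, n_lo=2.05, pairs=10,
                          n_in=1.0, n_sub=2.40, design_wavelength=1550e-9):
        """Reflectance of an n_hi/n_lo quarter-wave Bragg stack at the given
        wavelength, via the characteristic-matrix method, normal incidence."""
        m = [[1.0, 0.0], [0.0, 1.0]]  # identity: empty stack
        # Quarter-wave layers for the design wavelength, incident side first.
        layers = [(n_hi, design_wavelength / (4 * n_hi)),
                  (n_lo, design_wavelength / (4 * n_lo))] * pairs
        for n, d in layers:
            delta = 2 * math.pi * n * d / wavelength   # phase thickness
            c, s = cmath.cos(delta), cmath.sin(delta)
            layer = [[c, 1j * s / n], [1j * n * s, c]]
            m = [[m[0][0]*layer[0][0] + m[0][1]*layer[1][0],
                  m[0][0]*layer[0][1] + m[0][1]*layer[1][1]],
                 [m[1][0]*layer[0][0] + m[1][1]*layer[1][0],
                  m[1][0]*layer[0][1] + m[1][1]*layer[1][1]]]
        # r = (n_in*B - C)/(n_in*B + C) with (B, C) = M @ (1, n_sub).
        b = m[0][0] + m[0][1] * n_sub
        c_ = m[1][0] + m[1][1] * n_sub
        return abs((n_in * b - c_) / (n_in * b + c_)) ** 2
    ```

    At the design wavelength the reflectance grows toward unity as the number of layer pairs increases, which is the stop-band (band-reject) behavior the paper exploits; detuning the layer thicknesses away from quarter-wave shifts and reshapes this band.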

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    No full text
    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
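    The structural property that makes this workload GPU-friendly is that each pixel's induced-current computation is independent of every other pixel's, so the work maps cleanly onto one CUDA thread per pixel. The sketch below illustrates that per-pixel independence with a stdlib thread pool standing in for the GPU; the current model is a made-up placeholder for illustration, not the detector's actual response function, and none of this is the paper's code.

    ```python
    import math
    from concurrent.futures import ThreadPoolExecutor

    def induced_current(pixel_id, t=1.0, tau=0.5):
        """Toy per-pixel induced-current sample (placeholder physics)."""
        q = 1.0 + 0.01 * (pixel_id % 7)        # fake deposited charge
        return q * math.exp(-t / tau) / tau    # simple RC-like response

    def simulate_serial(n_pixels):
        # Reference CPU-style loop over pixels.
        return [induced_current(p) for p in range(n_pixels)]

    def simulate_parallel(n_pixels, workers=4):
        # Pixels are independent, so the map distributes with no
        # synchronization -- the same structure a CUDA kernel exploits
        # by assigning one thread per pixel.
        with ThreadPoolExecutor(max_workers=workers) as ex:
            return list(ex.map(induced_current, range(n_pixels)))
    ```

    Both functions produce identical results; in the paper the same decomposition is expressed as Numba-compiled CUDA kernels, where thousands of pixels are processed simultaneously rather than four at a time.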

    DUNE Offline Computing Conceptual Design Report

    No full text
    This document describes the conceptual design of the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), needed to accomplish its physics goals. The goals of the experiment include 1) studying neutrino oscillations using a beam of neutrinos sent from Fermilab in Illinois to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, 2) studying astrophysical neutrino sources and rare processes, and 3) understanding the physics of neutrino interactions in matter. Our emphasis is the development of the computing infrastructure needed to acquire, catalog, store, reconstruct, simulate, and analyze the ∼30 PB of data/year from DUNE and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves, and to provide computing that achieves the physics goals of the experiment. We describe the physics objectives, organization, use cases, and proposed technical solutions.